We develop a framework to learn bio-inspired foraging policies from human data. We conduct an experiment in which humans are virtually immersed in an open-field foraging environment and are trained to maximize the reward they collect. A Markov Decision Process (MDP) framework is introduced to model the human decision dynamics. Then, Imitation Learning (IL) based on maximum likelihood estimation is used to train Neural Networks (NN) that map observed states to human decisions. The results show that passive imitation substantially underperforms humans. We further refine the human-inspired policies via Reinforcement Learning (RL) using the on-policy Proximal Policy Optimization (PPO) algorithm, which shows better stability than other algorithms and can steadily improve the policies pre-trained with IL. We show that the combination of IL and RL matches human performance and that artificial agents trained with our approach can quickly adapt to reward distribution shift. We finally show that good performance and robustness to reward distribution shift strongly depend on combining allocentric information with an egocentric representation of the environment.
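The IL stage described above (maximum likelihood estimation of a policy from demonstrated state-action pairs, i.e. behavioral cloning) can be sketched as follows. This is a minimal illustration, not the authors' architecture: the linear softmax policy, synthetic "demonstration" data, and all parameter values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for human demonstrations: each row is an observed
# state (e.g. egocentric features), each label a discrete foraging action.
states = rng.normal(size=(200, 4))
true_w = rng.normal(size=(4, 3))
actions = np.argmax(states @ true_w + rng.normal(scale=0.1, size=(200, 3)), axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Maximum-likelihood imitation: minimize the negative log-likelihood of the
# demonstrated actions under the policy via gradient descent.
w = np.zeros((4, 3))
for _ in range(500):
    p = softmax(states @ w)
    onehot = np.eye(3)[actions]
    grad = states.T @ (p - onehot) / len(states)
    w -= 0.5 * grad

# Fraction of demonstrated actions the cloned policy reproduces.
acc = np.mean(np.argmax(softmax(states @ w), axis=1) == actions)
```

In the paper's pipeline the resulting policy would then be used to initialize PPO fine-tuning; here it only illustrates the maximum-likelihood fit to demonstrations.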
-
Behavioral data show that humans and animals have the capacity to learn rules of association applied to specific examples and to generalize these rules to a broad variety of contexts. This article focuses on neural circuit mechanisms for a context‐dependent association task that requires linking sensory stimuli to behavioral responses and generalizing to multiple other symmetrical contexts. The model uses neural gating units that regulate the pattern of physiological connectivity within the circuit. These neural gating units can be used in a learning framework that performs low‐rank matrix factorization analogous to recommender systems, allowing generalization with high accuracy to a wide range of additional symmetrical contexts. The neural gating units are trained with a biologically inspired framework involving traces of Hebbian modification that are updated based on the correct behavioral output of the network. This modeling demonstrates potential neural mechanisms for learning context‐dependent association rules and for the change in selectivity of neurophysiological responses in the hippocampus. The proposed computational model is evaluated using simulations of the learning process and the application of the model to new stimuli. Further, human-subject behavioral experiments were performed, and the results validate the key observation of a low‐rank synaptic matrix structure linking stimuli to responses.
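The recommender-system analogy above can be made concrete: if the stimulus-to-response mapping is low rank, then observing associations in only some contexts suffices to recover the unobserved ones. The sketch below uses plain gradient-based matrix completion rather than the paper's Hebbian-trace learning rule; the matrix sizes, rank, and observation fraction are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stimulus-by-context association matrix with a rank-2
# structure, analogous to a recommender system's user-item matrix.
U = rng.normal(size=(8, 2))
V = rng.normal(size=(6, 2))
M = U @ V.T  # full association matrix (rank 2)

# Observe only some stimulus-context pairs (trained contexts);
# hold out the rest as "untrained" contexts.
mask = rng.random(M.shape) < 0.7

# Fit two low-rank factors to the observed entries only.
A = rng.normal(scale=0.1, size=(8, 2))
B = rng.normal(scale=0.1, size=(6, 2))
for _ in range(2000):
    E = mask * (A @ B.T - M)   # error on observed entries
    A_new = A - 0.05 * E @ B
    B = B - 0.05 * E.T @ A
    A = A_new

# Because the underlying rule is low rank, the held-out (untrained)
# associations are recovered as well, not just the trained ones.
obs_err = np.abs((A @ B.T - M)[mask]).max()
heldout_err = np.abs((A @ B.T - M)[~mask]).max()
```

The design point mirrored here is the one the abstract makes: a low-rank synaptic structure is what permits generalization beyond the trained contexts.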